Search for: All records

Creators/Authors contains: "Forlizzi, J"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Lewkowicz, M; Schmidt, K (Ed.)
    This paper draws on Michel de Certeau’s notion of "tactics" to explore the use of data in labor organizing research in CSCW [?]. Taking a historical view, we first analyze a set of cases from 20th century US labor history that offer three distinct lenses on the risks of data-based advocacy campaigns: wagers, compromises, and concessions. Across our cases, we frame reformers’ use of data tactics as a rhetorical move, taken to advance incremental worker gains under conditions of precarity [90, 105, 154]. However, by continuing to rely on certain data-based arguments in the short term, we argue that labor reformers may have limited the frame of debate for broader arguments necessary to improve conditions in the long term. These tensions follow us into data-based advocacy research in the present, such as the emerging "digital workerism" movement [70]. To ensure the continuation of responsible advocacy research in CSCW, we offer insights from social justice movements to suggest how members of the HCI and CSCW communities can work more intentionally alongside (or without) data methods to support worker-led direct action.
    Free, publicly-accessible full text available October 31, 2026
  2. How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners’ awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
  3. Modern advances in AI have increased employer interest in tracking workers’ biometric signals — e.g., their brainwaves and facial expressions — to evaluate and make predictions about their performance and productivity. These technologies afford managers information about internal emotional and physiological states that were previously accessible only to individual workers, raising new concerns around worker privacy and autonomy. Yet, the research literature on the impact of AI-powered biometric work monitoring (AI-BWM) technologies on workers remains fragmented across disciplines and industry sectors, limiting our understanding of its impacts on workers at large. In this paper, we systematically review 129 papers, spanning varied disciplines and industry sectors, that discuss and analyze the impact of AI-powered biometric monitoring technologies in occupational settings. We situate this literature across a process model that spans the development, deployment, and usage phases of these technologies. We further draw on Shelby et al.’s Taxonomy of Socio-technical Harms in AI systems to systematize the harms experienced by workers across the three phases of our process model. We find that the development, deployment, and sustained use of AI-powered biometric work monitoring technologies put workers at risk of a number of the socio-technical harms specified by Shelby et al.: e.g., by forcing workers to exert additional emotional labor to avoid flagging unreliable affect monitoring systems, or through the use of these data to make inferences about productivity. Our research contributes to the field of critical AI studies by highlighting the potential for a cascade of harms to occur when the impact of these technologies on workers is not considered at all phases of our process model.
  4. Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.